Facial motion capture

Facial motion capture is the process of electronically converting the movements of a person's face into a digital database using cameras or laser scanners. This database may then be used to produce computer graphics (CG) animation for movies, games, or real-time avatars. Because the motion of CG characters is derived from the movements of real people, the result is more realistic and nuanced computer character animation than if the animation were created manually.

A facial motion capture database describes the coordinates or relative positions of reference points on the actor's face. The capture may be in two dimensions, in which case the capture process is sometimes called "expression tracking", or in three dimensions. Two-dimensional capture can be achieved using a single camera and low-cost capture software such as Zign Creations' Zign Track. This produces less sophisticated tracking and is unable to fully capture three-dimensional motions such as head rotation. Three-dimensional capture is accomplished using multi-camera rigs or laser marker systems. Such systems are typically far more expensive, complicated, and time-consuming to use.
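The structure of such a database can be sketched as follows. This is an illustrative minimal example, not any vendor's actual file format: per-frame coordinates of named reference points, in either two or three dimensions.

```python
# Minimal sketch of a facial motion capture "database": per-frame
# coordinates of named reference points on the face. Purely
# illustrative; real systems use their own binary or scene formats.
from dataclasses import dataclass, field

@dataclass
class Frame:
    time: float    # seconds from the start of the take
    points: dict   # point name -> (x, y) or (x, y, z)

@dataclass
class CaptureSession:
    dimensions: int              # 2 for "expression tracking", 3 for full 3D capture
    frames: list = field(default_factory=list)

    def add_frame(self, time, points):
        # Every point must match the session's dimensionality.
        assert all(len(p) == self.dimensions for p in points.values())
        self.frames.append(Frame(time, points))

# Two-dimensional capture from a single camera: image-plane positions only.
session = CaptureSession(dimensions=2)
session.add_frame(0.00, {"lip_corner_l": (412.0, 310.5), "lip_corner_r": (468.2, 309.8)})
session.add_frame(0.04, {"lip_corner_l": (413.1, 308.9), "lip_corner_r": (467.5, 308.2)})
```

A three-dimensional session would simply store `(x, y, z)` triples instead, typically reconstructed from multiple camera views.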

Facial motion capture is related to body motion capture, but is more challenging due to the higher resolution required to detect and track the subtle expressions produced by small movements of the eyes and lips. These movements are often less than a few millimeters, requiring even greater resolution and fidelity, and different filtering techniques from those usually used in full-body capture. The additional constraints of the face also allow more opportunities for using models and rules.
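To illustrate why filtering choices matter at this scale, here is a simple exponential moving average, one common low-pass filter. This is a sketch, not a recommendation of any particular pipeline's filter; the `alpha` value is an arbitrary example.

```python
# Illustrative only: smoothing noisy marker positions with an
# exponential moving average. A higher alpha follows the raw signal
# more closely and preserves subtle, millimeter-scale motion; a lower
# alpha smooths more aggressively and can flatten it.
def ema_filter(samples, alpha=0.6):
    """Smooth a sequence of (x, y) marker positions."""
    smoothed = []
    prev = None
    for x, y in samples:
        if prev is None:
            prev = (x, y)  # seed the filter with the first sample
        else:
            prev = (alpha * x + (1 - alpha) * prev[0],
                    alpha * y + (1 - alpha) * prev[1])
        smoothed.append(prev)
    return smoothed

raw = [(100.0, 200.0), (100.4, 199.8), (101.0, 200.1)]
print(ema_filter(raw))
```

Facial pipelines must tune such parameters so that camera noise is suppressed without erasing the small eye and lip movements the paragraph above describes.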

Two predominant technologies exist: marker-based and markerless tracking systems.

History

One of the first papers discussing performance-driven animation was published by Lance Williams in 1990. There, he describes "a means of acquiring the expressions of real faces, and applying them to computer-generated faces".[1]

Marker-based

Traditional marker-based systems apply up to 350 markers to the actor's face and track the marker movement with high-resolution cameras. This has been used on movies such as The Polar Express and Beowulf to allow an actor such as Tom Hanks to drive the facial expressions of several different characters. Unfortunately, this is relatively cumbersome and makes the actors' expressions overly driven once the smoothing and filtering have taken place. Next-generation systems such as CaptiveMotion utilize offshoots of the traditional marker-based system with higher levels of detail.

Active LED marker technology is currently being used to drive facial animation in real time to provide user feedback.

Markerless

Markerless technologies use the features of the face, such as nostrils, the corners of the lips and eyes, and wrinkles, and then track them. This technology is discussed and demonstrated at CMU,[2] IBM,[3] the University of Manchester (where much of this started with Tim Cootes,[4] Gareth Edwards and Chris Taylor) and other locations, using active appearance models, principal component analysis, eigen tracking, deformable surface models and other techniques to track the desired facial features from frame to frame. This technology is much less cumbersome, and allows greater expression for the actor.
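The principal-component-analysis side of such techniques can be sketched in miniature: a "point distribution model" learns the main modes of shape variation from training landmark sets, so a new frame can be described by a few shape parameters rather than raw coordinates. The data below is fabricated purely for illustration.

```python
# Toy sketch of the shape-model idea behind techniques such as active
# appearance models: PCA over aligned facial landmark sets. The
# three "training faces" here are fabricated, with only 3 landmarks each.
import numpy as np

# Each row: one training face as flattened (x, y) landmark coordinates.
shapes = np.array([
    [0.0, 0.0, 1.0, 0.0, 0.5, 1.0],
    [0.1, 0.0, 1.1, 0.1, 0.5, 1.2],
    [-0.1, 0.1, 0.9, 0.0, 0.5, 0.9],
])

mean_shape = shapes.mean(axis=0)
# PCA via SVD of the mean-centred training data.
u, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
components = vt  # principal modes of shape variation

# Project an observed shape into the low-dimensional parameter space...
observed = np.array([0.05, 0.02, 1.05, 0.05, 0.5, 1.1])
params = components @ (observed - mean_shape)
# ...and reconstruct it. A tracker fits such parameters frame to frame,
# constraining each frame's landmarks to plausible facial shapes.
reconstructed = mean_shape + components.T @ params
```

Fitting a compact parameter vector per frame is what makes such models robust: implausible landmark configurations simply cannot be represented.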

These vision-based approaches also have the ability to track pupil movement, eyelids, and occlusion of the teeth by the lips and tongue, which are obvious problems in most computer-animated features. Typical limitations of vision-based approaches are resolution and frame rate, both of which are becoming less of an issue as high-speed, high-resolution CMOS cameras become available from multiple sources.

The technology for markerless face tracking is related to that in a facial recognition system, since a facial recognition system can potentially be applied sequentially to each frame of video, resulting in face tracking. For example, the Neven Vision system[5] (formerly Eyematics, since acquired by Google) allowed real-time 2D face tracking with no person-specific training; it was also among the best-performing facial recognition systems in the U.S. Government's 2002 Facial Recognition Vendor Test (FRVT). On the other hand, some recognition systems do not explicitly track expressions, or even fail on non-neutral expressions, and so are not suitable for tracking. Conversely, systems such as deformable surface models pool temporal information to disambiguate and obtain more robust results, and thus cannot be applied to a single photograph.
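The per-frame idea can be sketched as follows. Here `detect_faces` is a hypothetical stand-in for any detection or recognition system, and the toy "frames" are just lists of face centres; linking detections across frames by nearest-neighbour association turns per-frame detection into tracking. As noted above, real trackers may instead pool temporal information rather than treating frames independently.

```python
# Hedged sketch: per-frame face detection plus frame-to-frame data
# association yields tracking. `detect_faces` is hypothetical, not any
# real library's API; the distance threshold of 50 pixels is arbitrary.
import math

def detect_faces(frame):
    """Hypothetical detector: returns a list of (cx, cy) face centres."""
    return frame  # in this toy, a 'frame' is already a list of centres

def track(frames):
    """Assign a stable id to the nearest detection in each new frame."""
    tracks = {}    # track id -> last known centre
    next_id = 0
    history = []
    for frame in frames:
        assigned = {}
        for centre in detect_faces(frame):
            # Greedy nearest-neighbour association with existing tracks.
            best = min(tracks,
                       key=lambda t: math.dist(tracks[t], centre),
                       default=None)
            if best is not None and math.dist(tracks[best], centre) < 50:
                assigned[best] = centre
                del tracks[best]   # each track matches at most one detection
            else:
                assigned[next_id] = centre  # start a new track
                next_id += 1
        tracks = assigned          # unmatched tracks are dropped
        history.append(dict(tracks))
    return history

# One face drifting right across three frames keeps a single id.
print(track([[(100, 100)], [(104, 101)], [(109, 100)]]))
```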

Markerless face tracking has progressed to commercial systems such as Image Metrics and has been applied in movies such as The Matrix sequels[6] and The Curious Case of Benjamin Button. The latter used the Mova Contour system to capture a deformable facial model, which was then animated with a combination of manual and vision-based tracking.[7] Avatar was another prominent performance-capture movie; however, it used painted markers rather than being markerless.

Markerless systems can be classified according to several distinguishing criteria: whether they work in 2D or 3D, whether they operate in real time, whether they require projected or hidden patterns, and whether they need per-person training.

To date, no system is ideal with respect to all of these criteria. For example, the Neven Vision system was fully automatic and required no hidden patterns or per-person training, but was 2D. The Face/Off system[8] is 3D, automatic, and real-time, but requires projected patterns.

References

  1. ^ Performance-Driven Facial Animation, Lance Williams, Computer Graphics, Volume 24, Number 4, August 1990
  2. ^ AAM Fitting Algorithms from the Carnegie Mellon Robotics Institute
  3. ^ Real World Real-time Automatic Recognition of Facial Expressions
  4. ^ Modelling and Search Software ("This document describes how to build, display and use statistical appearance models.")
  5. ^ Wiskott, Laurenz; J.-M. Fellous; N. Krüger; C. von der Malsburg (1997), "Face recognition by elastic bunch graph matching", Lecture Notes in Computer Science (Springer) 1296: 456–463, doi:10.1007/3-540-63460-6_150
  6. ^ Borshukov, George; D. Piponi; O. Larsen; J. Lewis; C. Tempelaar-Lietz (2003), "Universal Capture – Image-based Facial Animation for "The Matrix Reloaded"", ACM SIGGRAPH
  7. ^ Barba, Eric; Steve Preeg (18 March 2009), "The Curious Face of Benjamin Button", presentation at the Vancouver ACM SIGGRAPH chapter
  8. ^ Weise, Thibaut; H. Li; L. Van Gool; M. Pauly (2009), "Face/Off: Live Facial Puppetry", ACM Symposium on Computer Animation